Patent abstract:
A mechanism and method for allowing at least one client (40) to access data in a shared memory (22) includes allocating data present in the shared memory (22), the memory (22) being organized into a plurality of buffers (36), and providing access to the data by a client (40) or a server (50) without data locking or data access restriction.
Publication number: FR3025907A1
Application number: FR1558362
Filing date: 2015-09-09
Publication date: 2016-03-18
Inventors: Christian Reynolds Decker; Troy Stephen Brown; Kevin Brett Chapman
Applicant: GE Aviation Systems LLC
IPC primary class:
Patent description:

[0001] Mechanism and method for enabling communication between a client and a server by accessing message data in a shared memory

A line-replaceable unit (LRU) is a modular component of a larger assembly, such as a vehicle or an aircraft, and is designed to specifications ensuring that it can be exchanged and/or replaced in the event of a failure. For example, the LRUs of an aircraft may include fully autonomous systems, sensors, radios, and other auxiliary equipment for managing and/or performing functions of the aircraft. In an aircraft environment, LRUs may be designed to operate according to particular operating, interoperability and/or form-factor criteria, such as those defined by the ARINC series of standards. A plurality of LRUs may be connected by a data network to access or exchange data in a common, shared memory of a flight control computer or other computer system, which may further manage and/or perform functions of the aircraft.

In a first embodiment, a mechanism for enabling communication between at least one client and at least one server by accessing message data in a shared memory includes an allocation of data present in the shared memory to at least one mailslot, the allocation being accessible by a predetermined constant address, and a series of buffers for the/each client, each of the buffers being controllable by either of the client(s) and server(s); the mailslot(s) having references identifying the client(s) and the server(s); the client(s) having an active access pointer that allows the client(s) to directly manipulate message data via a buffer controlled by the client(s); and the server(s) having an active access pointer that allows the server(s) to directly manipulate the message data via a buffer controlled by the server(s). The active access pointers are allocated among the buffers using atomic operations, without copying the data at an operating-system level.

In another embodiment, a method for enabling communication between at least one client and a server by accessing message data in a shared memory includes assigning data present in the shared memory to at least one mailslot, allocating a single predetermined address to access the/each mailslot, allocating a number of buffers for the/each client, each buffer being controllable by a client or by a server, the number of buffers being equal to the number of transactions requested by the respective client, and reassigning an active client access pointer from a client-controlled buffer so as to pass command of the buffer from the client to the server, which allows the server to directly manipulate the message data through an active server access pointer. Access to the message data occurs by means of active access pointers to the buffers, without copying the message data at an operating-system level.

The invention will be better understood from the detailed description of some embodiments, given by way of non-limiting examples and illustrated by the appended drawings, in which:
FIG. 1 is a schematic view of an aircraft and its data communication network according to one embodiment of the invention; FIG. 2 is a schematic illustration of communication between a plurality of clients and/or servers accessing the shared memory, according to an embodiment of the invention; FIG. 3 is a schematic illustration of clients accessing the buffers of a mailslot, according to one embodiment of the invention; FIG. 4 is a schematic illustration of unidirectional and bidirectional memory spaces, according to one embodiment of the invention; FIG. 5 is a schematic view of a mechanism enabling clients to access the buffered message data according to one embodiment of the invention; FIG. 6 is a schematic view of a mechanism enabling clients to perform a read/write transaction in a buffer according to one embodiment of the invention; and FIG. 7 is a schematic view of a mechanism for directing a client to the secure buffer according to one embodiment of the invention.

The disclosed embodiments of the present invention are illustrated in the context of an aircraft having a data network interconnecting a common or shared memory accessible to a plurality of sensors, systems, and software and/or physical components of the aircraft. However, embodiments of the invention can be implemented in any context using clients and servers accessing a common or single shared memory. Moreover, although "clients" and "servers" are mentioned hereinafter, the particular embodiments described are non-limiting examples of clients and servers. Additional examples of clients and servers may include remote (via a data network or the Internet) or localized discrete units, applications, computer processes, processing threads, etc., or any combination thereof, which access a shared memory. For example, a plurality of "clients" may all be installed in the same computer or processing unit, accessing a common random-access memory (RAM).

FIG. 1 shows an aircraft 8 having a fuselage 10 and at least one turbine engine, represented in the form of a left engine system 12 and a right engine system 14.
[0002] The left and right engine systems 12, 14 may be substantially identical. Although turbine engines 12, 14 are shown, the aircraft may comprise a smaller or larger number of engine systems, or other possible propulsion engine systems, such as propeller engines. In addition, the aircraft 8 shown has a plurality of sensors, systems and components collectively referred to as line-replaceable units (LRUs) 18, and at least one server 20 or computing unit, represented in the form of two flight management systems, or flight control computers, located close to each other near the nose of the aircraft 8. At least one of the servers 20 may include a memory 22. The LRUs 18 and the servers 20 can communicate with each other via transmission and/or communication lines defining a data communication network 24 traversing at least part of the aircraft 8. Examples of the LRUs 18 are flight management systems and/or on-board maintenance systems. Additional LRUs 18 may be included. Although a server 20 is described, embodiments of the invention may include any computer system, flight computer, or display system displaying data from multiple systems.

The memory 22 may comprise a random-access memory (RAM), a flash memory or one or more different types of portable electronic memory, etc., or any suitable combination of these types of memory. The LRUs 18 and/or the servers 20 may cooperate with the memory 22 such that the LRUs 18 and/or the servers 20, or any computer programs or processes therein, can access at least a portion of the memory 22 (for example, "shared memory" 22).

As used herein, "programs" and/or "processes" may comprise all or part of a computer program having a set of executable instructions for controlling the management and/or operation of the respective client and/or respective server and/or respective functions of the aircraft 8. The program and/or the processes may comprise a computer program which may include computer-readable media for carrying, or having stored thereon, computer-executable instructions or data structures. Such computer-readable media may be any available media accessible to a general-purpose or special-purpose computer or other machine with a processor. Generally, such a computer program may include routines, programs, objects, components, data structures, algorithms, etc., which have the technical effect of performing particular tasks or implementing abstract data types. Computer-executable instructions, corresponding data structures and programs are examples of program code for performing the information exchange as disclosed herein. Computer-executable instructions may include, for example, instructions and data which cause a general-purpose computer, a special-purpose computer, a controller, or a special-purpose processing machine to perform a certain function or group of functions.

The aircraft 8 shown in FIG. 1 is only a schematic representation of one embodiment of the invention and serves to illustrate that a plurality of LRUs 18 and servers 20 may be located anywhere in the aircraft 8. The exact location of the LRUs 18 and servers 20 is not germane to the embodiments of the invention. In addition, a larger or smaller number of LRUs 18 and/or servers 20 may be included in embodiments of the invention. The communication network 24 is represented in the form of a bus but may include a number of connectors and data communication interfaces, for example Ethernet or fiber-optic cables, and routing and transmission components.
In addition, the configuration and operation of the communication network 24 may be defined by a common set of standards or regulations applicable to aeronautical environments. For example, the communication network 24 on board an aircraft 8 may be defined by, and/or configured according to, the ARINC 664 (A664) standard or the ARINC 653 (A653) standard.

FIG. 2 is a schematic illustration of the data communication network 24 according to one embodiment of the invention. A plurality of LRUs 18, each comprising one or more computer threads or processes 26, have access to the shared memory 22, represented as a shared RAM. In addition, one or more servers 20, each comprising one or more computer threads or processes 28, also have access to the shared memory 22. In this way, each process 26, 28 can have access to the shared memory 22. The memory 22 is shown further comprising an allocation 30 of data to at least one grouping, or "mailslot" 32, placed at a predetermined constant addressable location, or "constant address" 34, of the memory 22. In the sense of this description, a "mailslot" may include a predetermined subset of the memory 22 assigned to a particular data storage use for the aircraft 8.
[0003] For example, a single mailslot 32 may include a single data assignment, such as the airspeed of the aircraft, while another mailslot 32 may include a plurality of data elements, related or unrelated to one another, such as waypoints or the current flight plan. Embodiments of the invention may include configurations in which each individual mailslot 32 uses the same message data definitions, or in which different message data definitions are used in different mailslots 32. The mailslots 32 may be organized sequentially starting from the constant address 34, notably in the form of a singly linked list; however, additional organizational structures of the mailslots 32 may be designed, including matrices, variable allocations for each mailslot 32, etc., all starting from the location of the constant address 34.

Each of the processes 26, 28, and/or respectively the LRUs 18 and servers 20, is preconfigured to include the predetermined constant address 34 of the shared memory 22. In this sense, each process 26, 28, LRU 18 and/or server 20 is preconfigured to identify the location of the constant address 34 and, therefore, of the mailslot(s) 32 whose data is to be accessed. For the purposes of this description, each LRU 18 and/or LRU process 26 may be considered a "client" for accessing data in the shared memory 22, and each server 20 and/or server process 28 may be considered a "server" for accessing data in the shared memory 22. Additional embodiments may be included in which the servers 20 perform actions or functions similar to those of the clients, and the clients perform actions or functions similar to those of the servers 20. In this way, unless otherwise stated, "clients" and "servers" can perform interchangeable functions. Further, although the servers 20 and the LRUs 18 are shown as separate components, embodiments of the invention may include servers or clients installed in the same systems as each other and/or installed in the same system as the shared memory 22.

In one embodiment of the invention, the number of mailslots 32 in the shared memory 22 is predefined during the initialization of the memory 22, based on a known number of mailslots 32 accessible to the clients and/or servers. In another embodiment of the invention, the number of mailslots 32 is defined at run time or during execution by the collective number of mailslots 32 accessed by the clients and/or servers. In this sense, the number of mailslots 32 can be dynamic, increasing and decreasing as needed, or only incrementing as additional mailslots 32 are accessed.

Referring now to FIG. 3, the shared memory 22 can communicate with a number of clients 40 and servers 50. Each mailslot 32 of the shared memory 22 may further comprise a reference list 33 including at least one list of references for the/each client 40 and the/each server 50. The reference list 33 may contain, for example, routing, source, and/or destination information associated with each of the respective clients 40 and/or servers 50, such that, for example, a client 40 or a server 50 can consult the reference list 33 of the shared memory 22 in order to obtain at least one communication path to the other servers 50 or clients 40, respectively.
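By way of illustration only, the following sketch shows how such a shared-memory layout might be declared in C: a singly linked mailslot list anchored at a constant address and carrying a reference list. All identifiers (SHM_CONSTANT_ADDRESS, mailslot_t, reference_t) are hypothetical and not part of the patent disclosure; the address value is an assumption.

```c
#include <stdint.h>

/* Hypothetical constant address 34 with which every client and server
 * process is preconfigured; the value is an assumption for illustration. */
#define SHM_CONSTANT_ADDRESS ((uintptr_t)0x40000000u)

/* One entry of the reference list 33: routing/source/destination
 * information for a client 40 or a server 50. */
typedef struct reference {
    uint32_t owner_id;       /* identifies a client or a server        */
    uint32_t space_offset;   /* offset of its addressable memory space */
} reference_t;

/* A mailslot 32: a predetermined subset of the shared memory 22,
 * organized as a singly linked list starting at the constant address. */
typedef struct mailslot {
    struct mailslot *next;   /* next mailslot in the singly linked list */
    uint32_t         n_refs; /* number of entries in the reference list */
    reference_t      refs[]; /* reference list 33                       */
} mailslot_t;

/* Any process can locate the first mailslot without coordination,
 * because the constant address is known in advance. */
static inline mailslot_t *first_mailslot(void)
{
    return (mailslot_t *)SHM_CONSTANT_ADDRESS;
}
```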
In this way, the use of the constant address 34 and of the known mailslot 32 having the reference list 33 facilitates communication between one or more client(s) 40 and/or server(s) 50 without it being necessary to define mechanisms for direct communication between the clients 40 and/or the servers 50 themselves. As schematically shown, the/each client 40 includes an active access pointer 42 capable of identifying a specific addressable memory space, or a plurality of memory space groups, so that the client can access one or more buffers. A first client 54 can access a first addressable memory space 55 associated with the first client 54 and including a number of buffers 36. In addition, a second client 56 can access a second addressable memory space 57 associated with the second client 56 and including a second number of buffers 36. Each of the respective addressable memory spaces 55, 57 is identified and managed by its respective client 54, 56 and/or by the respective active access pointer 42 of that client. Each of the different buffers 36 may be configured to store a predetermined amount of data needed for a particular data item.

Embodiments of the invention may include configurations in which, for example, the first client 54 can access only its own memory space 55 and/or the buffers 36 associated with a particular mailslot, and therefore cannot access, for example, the memory space 57 of the second client 56. In this way, each client 54, 56 "owns" its respective memory space 55, 57, even if individual control of the buffers 36 can be attributed to other components. While the clients 40 may have access limited to their respective memory spaces 55, 57, the servers 50 can access the buffers 36 of any memory space 55, 57 of a client 40.

The number of buffers 36 for each addressable memory space 55, 57 can be defined by the number of transactions requested by each respective client 54, 56. Optionally, the number of buffers 36 for each addressable memory space 55, 57 can be defined by the number of transactions requested by each respective client 54, 56, plus one additional buffer 36. Thus, in the illustrated example, the first client 54 requested to perform two transactions in the shared memory 22 and was granted three buffers 36 (two, plus one extra buffer), while the second client 56 requested to perform three transactions in the shared memory 22 and was granted four buffers 36 (three, plus one extra buffer).

In one embodiment of the invention, the number of buffers 36 in each addressable memory space 55, 57 and the size of each buffer 36 are predefined during the initialization of the shared memory 22, based on a known number of clients 40 able to access the mailslot 32 and on a known number of transactions. In another embodiment of the invention, the number of buffers 36 in each addressable memory space 55, 57 is defined at run time or during execution by the collective number of clients 40 accessing the mailslot 32 and by the number of requested transactions. In this way, the number of buffers 36 can be dynamic, increasing or decreasing as needed, or only incrementing as additional clients 40 access the mailslot 32 or additional transactions are requested.
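The buffer-sizing rule described above (one buffer per requested transaction, optionally plus one extra) could be sketched as follows; this is a minimal illustration under assumed structures, not the patented implementation, and the fixed payload size is an assumption.

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_PAYLOAD 256              /* assumed fixed size of one data item */

/* Classification states of a buffer 36 (occupied 44 / unoccupied 46). */
typedef enum { BUF_UNOCCUPIED = 46, BUF_OCCUPIED = 44 } buf_state_t;

typedef struct buffer {
    buf_state_t state;               /* current classification state        */
    uint8_t payload[MAX_PAYLOAD];    /* message data, manipulated in place  */
} buffer_t;

/* An addressable memory space 55, 57 owned by one client 40. */
typedef struct client_space {
    size_t   n_buffers;              /* requested transactions (+1 extra)   */
    buffer_t buffers[];
} client_space_t;

/* Sizing rule: a client requesting n concurrent transactions is granted
 * n + 1 buffers, so one spare always remains (cf. the example above:
 * two transactions -> three buffers, three -> four). */
static size_t client_space_bytes(size_t n_transactions)
{
    size_t n = n_transactions + 1;   /* the optional "extra" buffer 36      */
    return sizeof(client_space_t) + n * sizeof(buffer_t);
}
```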
[0004] In still other embodiments of the invention, the mailslot 32 and the addressable memory spaces 55, 57 may be configured independently of one another. For example, the mailslot 32 may be defined as explained above, while the addressable memory spaces 55, 57 are dynamically configured during execution, or vice versa. In both examples, whether predefined or dynamically configured, the number of mailslots 32 and/or the configuration of the buffers 36 can be defined according to an algorithm or an executable program stored in the shared memory 22.
[0005] In addition, the/each server 50 comprises an active access pointer 52 and can access a specific buffer 36 indicated by the active access pointer 52. For example, a server 50 can access the reference list 33 of the mailslot 32, which can identify a client 40 and/or an addressable memory space 55, 57 associated with that client 40, and the buffers 36 present therein. In the illustrated example, the first client 54 is associated with a first buffer 58. Embodiments of the invention may comprise a single server 50 communicating with each mailslot 32.

FIG. 4 is a schematic view illustrating another configuration and another possible operation of the addressable memory space 55, 57 of a client. A unidirectional memory space 80 is shown, comprising at least one available buffer queue 82 managed by a client 40 (not shown) and a request buffer queue 84 managed by a server 50 (not shown). The available buffer queue 82 may be configured to contain the maximum number of buffers 36 available in the memory space 80, while the request buffer queue 84 may be configured to contain the maximum number of buffers 36 requested by the client 40 (i.e., the maximum number of buffers 36 in the memory space 80, minus one). In embodiments in which there are no "extra" buffers, the available buffer queue 82 and the request buffer queue 84 may be configured to contain the same number of buffers, equal to the maximum number of buffers 36 requested by the client 40.
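A hedged sketch of such queues and their capacities follows; the queues track ownership by holding buffer indices while the buffers themselves never move. The names and the ring-buffer representation are assumptions, not the patented layout.

```c
#include <stddef.h>
#include <stdlib.h>

/* A virtual queue holds indices of buffers 36; the buffers themselves
 * never change location in the memory space 80. */
typedef struct virtual_queue {
    size_t capacity;                 /* maximum number of entries     */
    size_t head, count;              /* ring-buffer cursors           */
    size_t slots[];                  /* indices into the buffer array */
} virtual_queue_t;

static virtual_queue_t *queue_create(size_t capacity)
{
    virtual_queue_t *q = calloc(1, sizeof *q + capacity * sizeof(size_t));
    if (q != NULL)
        q->capacity = capacity;
    return q;
}

/* Capacity rule for a unidirectional memory space 80 with one "extra"
 * buffer: the available queue 82 can hold every buffer (n + 1), while
 * the request queue 84 holds one fewer (n). */
static void make_unidirectional_space(size_t n_transactions,
                                      virtual_queue_t **available_q,
                                      virtual_queue_t **request_q)
{
    *available_q = queue_create(n_transactions + 1); /* queue 82 */
    *request_q   = queue_create(n_transactions);     /* queue 84 */
}
```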
[0006] In the illustrated example, the buffers 36 may contain the payload, or the message, being the subject of a transaction by the client 40 and/or the respective server(s) 50. When the client 40 performs unidirectional transaction requests (a transaction awaiting an interaction by the server 50, e.g. "request pending"), a buffer 36 for each transaction request can be transferred to the request buffer queue 84 to await the transaction or processing by a server 50.
[0007] Once the server 50 performs and/or processes the requested transaction, the buffer 36 is returned to the available buffer queue 82 so that the client 40 can make further transaction requests. The client 40 may also perform additional transactions and/or process the message data of the buffer 36 upon its return from the request buffer queue 84, before returning the buffer 36 to the available buffer queue 82. For the purposes of this disclosure, buffers 36 allocated to the available buffer queue 82 may be considered "available" or "unoccupied" for initiating new transactions, while buffers 36 allocated to the request buffer queue 84 may be considered "unavailable" or "busy".
[0008] Furthermore, since the request buffer queue 84 may be configured with queue space for one buffer fewer than the available buffer queue 82, embodiments of the invention may include configurations in which the client 40 cannot have transaction requests jointly pending on all the available buffers 36 of its respective memory space 80 (e.g., all the buffers 36 cannot be in the request buffer queue 84 at the same time). Although the illustration shows buffers 36 passing from one queue 82, 84 to the other queue 82, 84, the buffers 36 themselves do not change location in the memory space 80. In this way, the queues 82, 84 may be "virtual queues". The queues 82, 84 merely illustrate one embodiment of the invention, demonstrating the ownership of the respective buffers 36 during transaction processing in the unidirectional memory space 80.

A bidirectional memory space 86 is also illustrated and may include the available buffer queue 82 and the request buffer queue 84, as explained above, in addition to a response buffer queue 88 managed by a client 40 (not shown). Unless otherwise noted, the available buffer queue 82 and the request buffer queue 84 of the bidirectional memory space 86 operate in a manner similar to the operations described above. As shown, the response buffer queue 88 may also be configured to contain the maximum number of buffers 36 requested by the client (i.e., the maximum number of buffers 36 in the memory space 86, minus one). In embodiments in which there are no "extra" buffers, the response buffer queue 88 may be configured to hold a number of buffers equal to the maximum number of buffers 36 requested.

The difference between the unidirectional memory space 80 and the bidirectional memory space 86 is that, once the server 50 performs and/or processes the transaction requested in the request buffer queue 84, the buffer 36 is transferred to the response buffer queue 88 to be further processed by the client 40 (a transaction awaiting an interaction by the client 40, e.g. "response pending"). Upon completion of the additional processing by the client 40 in the response buffer queue 88, the buffer 36 is returned to the available buffer queue 82 so that the client 40 can make further transaction requests. For the purposes of the present description, buffers 36 assigned to the response buffer queue 88 may be considered "unavailable" or "busy". The response buffer queue 88 may also be a "virtual queue", as explained above. Further, embodiments of the invention may include configurations in which the client 40 cannot have transaction requests jointly pending on all the available buffers 36 of its respective memory space 86, and therefore the collective number of buffers allocated between the request buffer queue 84 and the response buffer queue 88 cannot exceed the number of buffers 36 requested by the client 40. In these configurations, the unidirectional memory space 80 may allow unidirectional communication, for example during a read-only transaction, while the bidirectional memory space 86 may allow bidirectional communication, for example during read and write transactions. Embodiments of the invention may include configurations in which the client 40 may initiate the transaction and the server 50 may respond with a corresponding transaction.
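The buffer lifecycle across the three queues can be summarized by the following sketch, in which a buffer cycles available -> request -> (response) -> available; the enum and function names are hypothetical.

```c
/* Virtual queues owning a buffer 36 at each stage of a transaction. */
typedef enum {
    Q_AVAILABLE, /* queue 82: client-managed, buffer unoccupied 46     */
    Q_REQUEST,   /* queue 84: server-managed, "request pending", busy  */
    Q_RESPONSE   /* queue 88: client-managed, "response pending", busy */
} queue_id_t;

/* Lifecycle of one buffer in a bidirectional memory space 86; in a
 * unidirectional memory space 80, the Q_RESPONSE stage is skipped and
 * the server returns the buffer directly to the available queue 82. */
static queue_id_t next_queue(queue_id_t q, int bidirectional)
{
    switch (q) {
    case Q_AVAILABLE:                     /* client submits a transaction  */
        return Q_REQUEST;
    case Q_REQUEST:                       /* server completes the request  */
        return bidirectional ? Q_RESPONSE : Q_AVAILABLE;
    case Q_RESPONSE:                      /* client finishes its processing */
    default:
        return Q_AVAILABLE;
    }
}
```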
Unidirectional and bidirectional memory spaces 80, 86 may be present in any number in embodiments of the invention and may be defined by the requested transactions, as explained above. The mechanisms for enabling communication between at least one client 40 and at least one server 50 by accessing message data in the buffers 36 of the shared memory 22 are described with reference to FIG. 5. In FIG. 5, a single client 40 and a corresponding addressable memory space 57 are shown for ease of understanding and for brevity. Embodiments of the invention may include a plurality of clients 40 and respective memory spaces 57, each executing similar mechanisms. In addition, by way of illustration, the plurality of buffers 36 shown have different classification states, including occupied states 44 and unoccupied states 46. In these examples, a "busy" buffer 44 may be either controlled by a client 40 or controlled by a server 50, "control" designating the respective controller's ability to directly manipulate the message data in the buffer 36. Ownership may be controlled and/or managed by, for example, the client 40, or may be assigned and/or managed by the client's active access pointer 42.

The client 40 and/or the active access pointer 42 directs access to the plurality of buffers 36 based on a data transaction request. Thus, a first buffer 58 has been identified as a busy buffer 44 and is controlled by the client 40 via a first communication 64. When the client 40 has completed the transaction, or part of the transaction, with the first buffer, the client 40 may mark the buffer, for example, "request pending" to signify that a transaction is required from the server 50, and terminate the first communication 64. Independently of the transaction with the server 50, if the client 40 requests a new transaction, the active access pointer 42 will handle the client's communication by identifying and pointing to the next available (e.g., unoccupied) buffer, shown as a second buffer 60. The client 40 may then communicate with the second buffer 60 via a second communication 66, and the second buffer 60 will take a busy state 44 (not shown). The client will then perform the second desired transaction on the data stored in the second buffer 60. Upon another transaction request by the same client 40, the mechanism is repeated so that the client's transaction request accesses an unoccupied buffer 46, identified by the client 40 and/or the active access pointer 42.

The mechanism illustrated in FIG. 6 builds on the mechanism shown in FIG. 5. In the present example, the client 40 has executed, for example, a read/write transaction on the first buffer 58, completed the transaction and marked the first buffer 58, for example, "request pending" to signify that a transaction is required from the server 50, and now performs a transaction on the second buffer 60. The server 50 may be in the process of performing transactions for a number of clients 40 according to a scheduling plan.
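Before turning to the server side, a minimal sketch of this client-side behavior of FIGs. 5 and 6, assuming per-buffer atomic state words (the state names and helpers are hypothetical), might read:

```c
#include <stdatomic.h>
#include <stddef.h>

typedef enum {
    ST_UNOCCUPIED,       /* unoccupied state 46                  */
    ST_CLIENT_OWNED,     /* busy 44: controlled by the client 40 */
    ST_REQUEST_PENDING   /* busy 44: awaiting the server 50      */
} st_t;

typedef struct {
    _Atomic st_t state;
    /* ... message data payload ... */
} buf_t;

/* The active access pointer 42 scans for the next unoccupied buffer 46
 * and claims it atomically; NULL mirrors the transaction-failure
 * indication returned when every buffer is busy (cf. FIG. 7). */
static buf_t *claim_buffer(buf_t *bufs, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        st_t expected = ST_UNOCCUPIED;
        if (atomic_compare_exchange_strong(&bufs[i].state, &expected,
                                           ST_CLIENT_OWNED))
            return &bufs[i];   /* the client now commands this buffer */
    }
    return NULL;               /* all busy: the client may retry later */
}

/* After manipulating the message data in place, the client hands the
 * buffer over to the server with a single atomic store. */
static void mark_request_pending(buf_t *b)
{
    atomic_store(&b->state, ST_REQUEST_PENDING);
}
```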
[0009] For example, the server 50 may be in the process of performing transactions for clients according to round-robin scheduling, first-in-first-out scheduling, last-in-first-out scheduling, sequential scheduling, quality-of-service scheduling, a timed schedule where each client 40 has a defined time slot in which to interact, or a combination thereof. Additional scheduling algorithms and/or methods may be included for carrying out a number of transactions of a client 40. In the illustrated example, when the server 50 has determined that the client 40 is to be served, the server may begin by consulting the mailslot 32 and/or the reference list 33 to identify the client 40 (illustrated by the communication 68). The server 50 may then consult the client 40 and/or the client's active access pointer 42 to determine whether any transactions are required from the server 50 (illustrated by the communication 70). If no transaction is required from the server 50, the server 50 may continue to operate according to the plan or algorithm and, for example, may proceed to the next client to be served. However, as described above, the first buffer 58 contains a transaction to be completed by the server 50. The client 40 and/or the active access pointer 42 identifies that the first buffer 58 is ready to be controlled by the server 50 and may provide, for example, the location of the first buffer 58 in the shared memory 22.
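As one hypothetical illustration of the round-robin case, a server-side polling loop over the clients referenced by the mailslot might look like the following sketch, reusing the assumed atomic buffer states of the previous sketch:

```c
#include <stdatomic.h>
#include <stddef.h>

typedef enum { ST_UNOCCUPIED, ST_REQUEST_PENDING, ST_SERVER_OWNED } st_t;
typedef struct { _Atomic st_t state; } buf_t;

/* Hypothetical per-client view obtained from the reference list 33. */
typedef struct { buf_t *bufs; size_t n; } client_ref_t;

/* Placeholder for the server directly manipulating the message data. */
static void process_transaction(buf_t *b) { (void)b; }

/* Round-robin serving: consult each client in turn (communications 68
 * and 70) and complete any pending request (communication 72), then
 * release the buffer back to the unoccupied state. */
static void serve_round_robin(client_ref_t *clients, size_t n_clients)
{
    for (size_t c = 0; c < n_clients; c++) {
        for (size_t i = 0; i < clients[c].n; i++) {
            st_t expected = ST_REQUEST_PENDING;
            if (atomic_compare_exchange_strong(&clients[c].bufs[i].state,
                                               &expected, ST_SERVER_OWNED)) {
                /* the server's active access pointer 52 now commands it */
                process_transaction(&clients[c].bufs[i]);
                atomic_store(&clients[c].bufs[i].state, ST_UNOCCUPIED);
            }
        }
    }
}
```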
[0010] The active access pointer 52 of the server then points to the first identified buffer 58 and proceeds to perform the requested transaction (illustrated by the communication 72). When the transaction requested of the server 50 is completed, the server 50 may mark the buffer 58, for example, "response pending" for a further transaction, or make it available (unoccupied) for a new transaction. The server can then decouple the communication 72 from the first buffer 58 and can, if necessary or according to the plan, repeat the communications described above to serve the buffers 36 of additional clients 40. In addition, embodiments of the invention may comprise a priority indicator for prioritizing the serving of particular buffers 36 by the server 50.
[0011] In addition, although a single transaction request for the server 50 has been explained, a client 40 may create transactions that result in the queuing of a plurality of requests for the server 50. Regardless of how the server 50 responds to a plurality of server transaction requests, embodiments of the invention may include instances where all the buffers 36 are busy while a client 40 or server 50 seeks to request an additional transaction. Such a scenario is illustrated in FIG. 7, where there are no available or unoccupied buffers 36. In this case, the client performs a transaction on data stored in a buffer 36, and all the other buffers 36 are occupied 48, for example awaiting a transaction by the server 50. In the present example, the communication mechanism provides that the client 40 and/or the active access pointer 42 and/or the current buffer 36 will respond to the respective transaction request with a transaction failure indication until an additional unoccupied buffer 46 becomes available. In this way, upon a failure of the transaction request, the client can again attempt to perform the requested transaction, which may succeed later, for example once one or more buffers 36 become unoccupied 46. Thus, the mechanism provides a number of buffers equal to the number of transactions requested by the client 40, plus one (i.e., an optional "extra" buffer 36), so that the client 40 always has a spare buffer 36 for transactions, even uncompleted ones, until additional buffers 36 become available. In embodiments where no "extra" buffer is provided, a client 40 may have no buffer 36 with which to attempt the requested transaction, and no transaction is performed until one or more buffers 36 become available again.

The mechanisms described above operate only with machine assembly language transactions and/or atomic operations, without copying data at any design level above the machine assembly language, notably without copying data at the operating-system level (e.g., "zero copy"). The embodiments described above have the technical effect that the zero-copy operation is performed by directing the clients 40 and/or the servers 50, using the active access pointers 42, 52, to the buffers 36 containing the message data, so that the message data is never "locked" or "blocked" in a way that prevents access by other clients 40 and servers 50. In addition, the use of machine assembly language allows "atomic exchange" operations on references, in which the update is completed in a single atomic operating cycle and cannot be interrupted by other updates of the data or the buffer, because other updates cannot complete within an operating cycle shorter than the atomic exchange. In this way, the exchange operations ensure that the handover of a reference to a buffer 36 either succeeds or fails absolutely, so there is no risk of corruption of the reference itself, for example following an interruption of the exchange. The mechanism operates across the entire scope of the clients 40 and/or processes 26, 28 and does not rely on disabling interrupts.
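The "atomic exchange" of buffer references can be sketched with C11 atomics as follows; this illustrates the principle only (the handover succeeds or fails in one indivisible step, and no message data are copied), not the patented assembly-level implementation.

```c
#include <stdatomic.h>

typedef struct buffer buffer_t;        /* opaque buffer 36 */

/* A shared reference to a buffer, updated only atomically. */
static buffer_t *_Atomic current_ref;

/* Swap in a new reference and receive the previous one in a single
 * indivisible cycle; no intermediate, half-updated state is visible. */
static buffer_t *exchange_reference(buffer_t *new_buf)
{
    return atomic_exchange(&current_ref, new_buf);
}

/* Conditional handover: succeeds only if nobody else won the race
 * first ("the first to access the data wins"); returns 1 on success,
 * 0 on failure, with the reference never corrupted either way. */
static int try_handover(buffer_t *expected, buffer_t *new_buf)
{
    return atomic_compare_exchange_strong(&current_ref, &expected, new_buf);
}
```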
[0012] By using machine assembly language instructions and basic data structures (e.g., singly linked lists, basic pointers), the mechanisms provide asynchronous inter-process data communications between at least one server 50 and at least one client 40 in a shared memory 22, using a zero-copy data exchange that allows "lock-free" or "block-free" access to the accessible data, without complex priority configuration of a process and without the "priority inversion" phenomenon, in which a lower-priority process that has locked the data through prior access does not "let go" of it even when a higher-priority process requires access. In fact, since operations using machine instructions tend toward "the first to access the data wins", higher-priority processes may still be the first to perform their operations. In addition, the mechanisms provide "wait-free" access to the accessible data that can operate at the process level, not just at the thread level.

Embodiments of the invention may further utilize the mechanisms described above by programming application programming interfaces (APIs), so that the mechanisms can be accessed at an operating-system level (or at an application level, etc.) through the APIs. This has the technical effect that the embodiments described above allow the zero-copy method to prevent data locking, data blocking and/or priority inversion.

The mechanisms described above are furthermore designed and configured so that the mailslot 32 is able to absorb a certain number of transaction requests from clients 40 and/or servers 50, even if the number of transactions requested is larger than expected or desired, or if the requests are created at a rate too high for the server to respond to. In addition, the described mechanism can counter a denial-of-service attack, in which one or more clients seek to render a machine or network resource unavailable to its intended users by overloading the target server with transaction requests so that it cannot provide the intended service. Denial-of-service attacks may seek to monopolize the server 50, the client 40 and/or the buffer resources 36, including bandwidth, processing means, or the ability to respond to priority transactions, or may hinder or reduce the intended service or, at worst, cause the server or resource to fail. However, upon any attempt to monopolize a resource of the mechanism described above, the possible combination of transaction-request failures and the "extra" buffer of the server 50 will prevent such a denial-of-service attack from taking effect, the failed transaction requests occurring without consuming resources, as described above, and without locking or blocking of the respective data.

An additional achievable advantage of the above embodiments is that they prevent the system from malfunctioning as a result of data-copying operations at a language level other than the machine language. In addition, embodiments of the invention reduce the number of copies needed by using references and buffers, as described above. Another advantage of the embodiments described above is an integrated mechanism for overwriting the oldest data present in the buffers, which therefore requires no "garbage collection" data management method. In addition, conventional data sharing between a server and one or more clients is accomplished by creating global data storage and protecting it with semaphores (i.e., control values
such as locked/unlocked indicators), for example at an operating-system level, or with any other mutex or data-lock protection (e.g., data interrupts, etc.), which can be very expensive in terms of machine time, especially if the stored data are bulky. The mechanism described here allows more efficient, faster access operations without locking. Moreover, operating systems usually do not provide semaphore protection between processes, but only between the threads of a process. Other achievable advantages of the embodiments described above include the fact that the mailslot-type arrangement has the flexibility of loose process coupling; it requires little coordination and does not require a staggered start-up (that is, processes, clients and/or servers can come online at any time). In addition, the implementation of the APIs described above may result in lower system-debugging costs and greater performance margins on similar hardware, compared with other copying methods.

Insofar as not already described, the various aspects and structures of the various embodiments can be used in combination with each other at will. The fact that an aspect may not be illustrated in all embodiments does not mean that it should be interpreted as not being possible there; this is merely done to keep the description concise. Thus, the various aspects of the various embodiments can be mixed and matched as desired to form new embodiments, whether or not the new embodiments are expressly described. All combinations or permutations of details described herein are covered by this disclosure.
[0013] List of reference marks

8 Aircraft
10 Fuselage
12 Left engine system
14 Right engine system
16
18 Line-replaceable unit (LRU)
20 Server
22 Memory
24 Data communication network
26 LRU process
28 Server process
30 Data allocation
32 Mailslot
33 Reference list
34 Constant address
36 Plurality of buffers
37
38
40 Clients
42 Active access pointer
44 Busy buffer
46 Unoccupied buffer
48
50 Server
52 Active access pointer
54 First client
55 First addressable memory space
56 Second client
57 Second addressable memory space
58 First buffer
60 Second buffer
64 First communication
66 Second communication
68 Communication
70 Communication
72 Communication
73
74
76
78
80 Unidirectional memory space
82 Available buffer queue
84 Request buffer queue
86 Bidirectional memory space
88 Response buffer queue
Claims (15)
1. A mechanism for enabling communication between at least one client (40) and at least one server (50) by accessing message data in a shared memory (22), characterized in that it comprises: an allocation (30) of data present in the shared memory (22) to at least one mailslot (32), the allocation (30) being accessible by a predetermined constant address (34), and a series of buffers (36) for the/each client (40) for performing transaction requests, each of the buffers (36) being controllable by either of the respective client(s) (40) and server(s) (50); the mailslot(s) (32) having references identifying the client(s) (40) and the server(s) (50); each client (40) and each server (50) having an active access pointer (42, 52); the client(s) (40) having an active access pointer (42) which allows the client(s) (40) to directly manipulate message data using a buffer (36) controlled by the client(s) (40); and the server(s) (50) having an active access pointer (52) which allows the server(s) (50) to directly manipulate message data using a buffer (36) controlled by the server(s) (50); the active access pointers (42, 52) being allocated among the buffers (36) only by means of atomic operations, without copying the data at an operating-system level.
2. The mechanism according to claim 1, the mechanism being a flight management system.
3. The mechanism according to claim 1, wherein the mailslot(s) (32) and the series of buffers (36) are predefined during the initialization of the shared memory (22).
4. The mechanism according to claim 1, wherein the transaction request comprises reading the data and/or writing new data into the buffer (36).
5. The mechanism according to claim 4, wherein at least one transaction is assigned to a unidirectional memory space (80) comprising at least one available buffer queue (82) and one request buffer queue (84).
6. The mechanism according to claim 4, wherein at least one transaction is assigned to a bidirectional memory space (86) comprising at least one available buffer queue (82), one request buffer queue (84) and one response buffer queue (88).
7. The mechanism according to claim 1, wherein the number of buffers (36) is at least the number of transactions requested by the respective client (40), plus one extra buffer.
8. A method for enabling communication between at least one client (40) and at least one server (50) by accessing message data in a shared memory (22), characterized in that it comprises: the assignment of data present in the shared memory (22) to at least one mailslot (32); the assignment of a single predetermined address (34) for accessing the/each mailslot (32); the allocation of a number of buffers (36) for the at least one client (40), each buffer (36) being controllable by a client (40) or controllable by a server (50), the number of buffers (36) being equal to the number of transactions requested by the respective client (40); and the reassignment of an active client access pointer (42) from a buffer (36) controlled by the client (40), so as to pass the buffer (36) from client control to server control, thereby enabling the server (50) to directly manipulate the message data with the aid of an active server access pointer (52); access to the message data occurring through the active access pointers of the buffers (36) without copying the message data at the operating-system level.
9. The method of claim 8, wherein the assignment of data to at least one mailslot (32), the assignment of a single predetermined address and the allocation of the number of buffers (36) for the/each client (40) occur during the initialization of the shared memory (22).
10. The method of claim 8, wherein accessing the message data comprises reading the data and/or writing new data into the buffer (36).
11. The method of claim 10, wherein at least one transaction is performed in a unidirectional memory space (80) comprising at least a state portion and a message data portion.
12. The method of claim 10, wherein at least one transaction is performed in a bidirectional memory space (86) comprising at least one available buffer queue (82), one request buffer queue (84) and one response buffer queue (88).
13. The method of claim 8, including initiating a new transaction request from a client (40) in a respective unoccupied buffer (36) controlled by a client (40).
14. The method of claim 8, wherein the number of buffers (36) is at least equal to the number of transactions requested by the respective client (40), plus one extra buffer.
15. The method of claim 14, wherein a new client transaction request fails if all the respective client buffers (36) are busy.
Similar technologies:
Publication number | Publication date | Patent title
FR3025908B1|2019-07-12|MECHANISM AND METHOD FOR ACCESSING DATA IN A SHARED MEMORY
FR3025907B1|2019-07-26|MECHANISM AND METHOD FOR PROVIDING COMMUNICATION BETWEEN A CLIENT AND A SERVER BY ACCESSING SHARED MEMORY MESSAGE DATA.
US10949390B2|2021-03-16|Asynchronous queries on secondary data cores in a distributed computing system
US10684872B2|2020-06-16|Management of container host clusters
US20180046507A1|2018-02-15|Reserving a core of a processor complex for a critical task
US9575800B2|2017-02-21|Using queues corresponding to attribute values and priorities associated with units of work and sub-units of the unit of work to select the units of work and their sub-units to process
US10579550B2|2020-03-03|Low overhead exclusive control for shared memory objects
US11144213B2|2021-10-12|Providing preferential access to a metadata track in two track writes
US9563366B2|2017-02-07|Using queues corresponding to attribute values associated with units of work and sub-units of the unit of work to select the units of work and their sub-units to process
US10223164B2|2019-03-05|Execution of critical tasks based on the number of available processing entities
US20190272196A1|2019-09-05|Dispatching jobs for execution in parallel by multiple processors
US10831563B2|2020-11-10|Deadlock resolution between distributed processes using process and aggregated information
US10673937B2|2020-06-02|Dynamic record-level sharing | provisioning inside a data-sharing subsystem
WO2012038000A1|2012-03-29|Method for managing tasks in a microprocessor or in a microprocessor assembly
WO2009013437A1|2009-01-29|Method for managing the shared resources of a computer system, a module for supervising the implementation of same and a computer system having one such module
US10956228B2|2021-03-23|Task management using a virtual node
US10698785B2|2020-06-30|Task management based on an access workload
US9852075B2|2017-12-26|Allocate a segment of a buffer to each of a plurality of threads to use for writing data
Singh et al.2015|Survey on Data Processing and Scheduling in Hadoop
Patent family:
Publication number | Publication date
CN105426258B|2021-04-09|
FR3025907B1|2019-07-26|
BR102015020596A2|2016-03-22|
GB2532843A|2016-06-01|
GB201516102D0|2015-10-28|
US10560542B2|2020-02-11|
US20160080517A1|2016-03-17|
CN105426258A|2016-03-23|
JP2016062606A|2016-04-25|
GB2532843B|2018-08-29|
CA2902933A1|2016-03-15|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title

JPH05224956A|1992-02-14|1993-09-03|Nippon Telegr & Teleph Corp <Ntt>|Inter-process message communication method|
JP3489157B2|1993-11-26|2004-01-19|株式会社日立製作所|Distributed shared memory system and computer|
US6047391A|1997-09-29|2000-04-04|Honeywell International Inc.|Method for strong partitioning of a multi-processor VME backplane bus|
US6519686B2|1998-01-05|2003-02-11|Intel Corporation|Information streaming in a multi-process system using shared memory|
US6341338B1|1999-02-04|2002-01-22|Sun Microsystems, Inc.|Protocol for coordinating the distribution of shared memory|
WO2001013229A2|1999-08-19|2001-02-22|Venturcom, Inc.|System and method for data exchange|
US20020144010A1|2000-05-09|2002-10-03|Honeywell International Inc.|Communication handling in integrated modular avionics|
US7203706B2|2002-08-01|2007-04-10|Oracle International Corporation|Buffered message queue architecture for database management systems with memory optimizations and “zero copy” buffered message queue|
US7380039B2|2003-12-30|2008-05-27|3Tera, Inc.|Apparatus, method and system for aggregrating computing resources|
US7562138B2|2004-12-28|2009-07-14|Sap|Shared memory based monitoring for application servers|
US7454477B2|2005-05-16|2008-11-18|Microsoft Corporation|Zero-copy transfer of memory between address spaces|
US7589735B2|2005-08-24|2009-09-15|Innovative Solutions & Support |Aircraft flat panel display system with graphical image integrity|
JP2008077325A|2006-09-20|2008-04-03|Hitachi Ltd|Storage device and method for setting storage device|
US8347373B2|2007-05-08|2013-01-01|Fortinet, Inc.|Content filtering of remote file-system access protocols|
US8055856B2|2008-03-24|2011-11-08|Nvidia Corporation|Lock mechanism to enable atomic updates to shared memory|
FR2938671B1|2008-11-17|2011-06-03|Sagem Defense Securite|SECURE AVIONIC EQUIPMENT AND ASSOCIATED SECURITY METHOD|
US8316368B2|2009-02-05|2012-11-20|Honeywell International Inc.|Safe partition scheduling on multi-core processors|
FR2947359B1|2009-06-30|2011-07-29|St Microelectronics Grenoble 2 Sas|METHOD AND DEVICE FOR SIMULATION OF A RESET SIGNAL IN A SIMULATED CHIP SYSTEM|
US20130166271A1|2010-07-06|2013-06-27|Torkel Danielsson|Simulating and testing avionics|
US9098462B1|2010-09-14|2015-08-04|The Boeing Company|Communications via shared memory|
BR112014031915A2|2012-06-21|2017-06-27|Saab Ab|Method for managing memory access of avionics control system, avionics control system and computer program|
US9539155B2|2012-10-26|2017-01-10|Hill-Rom Services, Inc.|Control system for patient support apparatus|
EP2743830A1|2012-12-13|2014-06-18|Eurocopter España, S.A.|Flexible data communication among partitions in integrated modular avionics|
US10069779B2|2013-03-25|2018-09-04|Ge Aviation Systems Llc|Method of hybrid message passing with shared memory|
EP2784676A1|2013-03-28|2014-10-01|Eurocopter España, S.A.|DIMA extension health monitor supervisor|
WO2014174340A1|2013-04-22|2014-10-30|Chad Klippert|Aircraft flight data monitoring and reporting system and use thereof|
US9485113B2|2013-10-11|2016-11-01|Ge Aviation Systems Llc|Data communications network for an aircraft|
US9749256B2|2013-10-11|2017-08-29|Ge Aviation Systems Llc|Data communications network for an aircraft|
EP2963619A1|2014-06-30|2016-01-06|Airbus Operations GmbH|Data collection apparatus, data collection system and method for data collection in vehicles|
US9794340B2|2014-09-15|2017-10-17|Ge Aviation Systems Llc|Mechanism and method for accessing data in a shared memory|
US9537862B2|2014-12-31|2017-01-03|Vivint, Inc.|Relayed network access control systems and methods|FR3030805B1|2014-12-19|2016-12-23|Thales Sa|QUALITY OF SERVICE OF A FLIGHT MANAGEMENT SYSTEM|
US10417261B2|2016-02-18|2019-09-17|General Electric Company|Systems and methods for flexible access of internal data of an avionics system|
DE102016217100B4|2016-09-08|2019-12-24|Continental Teves Ag & Co. Ohg|Process for processing vehicle-to-X messages|
DE102016217099B4|2016-09-08|2019-12-24|Continental Teves Ag & Co. Ohg|Process for processing vehicle-to-X messages|
Legal status:
2016-09-26| PLFP| Fee payment|Year of fee payment: 2 |
2017-09-25| PLFP| Fee payment|Year of fee payment: 3 |
2018-08-22| PLFP| Fee payment|Year of fee payment: 4 |
2018-09-28| PLSC| Search report ready|Effective date: 20180928 |
2019-08-20| PLFP| Fee payment|Year of fee payment: 5 |
2020-08-19| PLFP| Fee payment|Year of fee payment: 6 |
2021-08-19| PLFP| Fee payment|Year of fee payment: 7 |
Priority:
Application number | Filing date | Patent title
US14/486,325|US10560542B2|2014-09-15|2014-09-15|Mechanism and method for communicating between a client and a server by accessing message data in a shared memory|
US14486325|2014-09-15|